Computer Aided Formal Reasoning (@ Nottingham)

We'll be peeling away that disguise and showing how you can integrate programming and proof in a single system, if you happen to have a functional language with an expressive type system handy...

Have you guessed? It's an Epigram course module from the University of Nottingham.

What you will find on the linked page is a set of exercises consisting of downloadable Epigram files for your enjoyment.

Delimited Control for PLT Scheme

Via the PLT Scheme mailing list:

MzScheme's support for prompts and composable continuations most closely resembles Dorai Sitaram's tagged `%' and `fcontrol' operators; see "Handling Control", PLDI'93 [previously mentioned at LtU], or see Dorai's dissertation.

You can read about the primitive functionality of delimited continuations in the language manual. You can read about all the different control operators available in the new control.ss library in the library manual. Among them: %, fcontrol, prompt, control, prompt-at, control-at, reset, shift, reset-at, shift-at, prompt0, reset0, control0, shift0, prompt0-at, reset0-at, control0-at, shift0-at, set, cupto.
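For readers who haven't met delimited control, here is a rough one-shot sketch in Python (generators standing in for continuation capture; this is only loosely analogous to the Scheme operators, whose continuations are multi-shot and composable):

```python
# A minimal, one-shot sketch of reset/shift using Python generators.
# Illustrative only: a generator can be resumed just once, whereas
# MzScheme's captured continuations are composable and multi-shot.

def reset(body):
    """Run a generator; each `yield f` acts as a shift whose function f
    receives the captured (one-shot) delimited continuation."""
    gen = body()

    def run(value):
        try:
            f = gen.send(value)
        except StopIteration as stop:
            return stop.value
        # The rest of the generator is the delimited continuation.
        return f(lambda v: run(v))

    return run(None)

def prog():
    # roughly: (reset (* 10 (shift k (+ 1 (k 2)))))
    x = yield (lambda k: k(2) + 1)
    return x * 10

print(reset(prog))  # 21: k(2) runs the rest of the body (2 * 10), then + 1
```

The point of the delimiter is visible here: `k` represents only the computation up to `reset`, not the whole rest of the program.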

(To prevent any confusion: this change is in the latest SVN but not yet a final release.)

Good Ideas, Through the Looking Glass

Niklaus Wirth. Good Ideas, Through the Looking Glass, IEEE Computer, Jan. 2006, pp. 56-68.

An entire potpourri of ideas is listed from the past decades of Computer Science and Computer Technology. Widely acclaimed at their time, many have lost in splendor and brilliance under today’s critical scrutiny. We try to find reasons. Some of the ideas are almost forgotten. But we believe that they are worth recalling, not the least because one must try to learn from the past, be it for the sake of progress, intellectual stimulation, or fun.

A personal look at some ideas, mostly from the field of programming languages. Some of Wirth's objections are amusing, some infuriating - and some I agree with...

LtU readers will obviously go directly to sections 4 (Programming Language Features) and 6 (Programming Paradigms). Here are a few choice quotes:

It has become fashionable to regard notation as a secondary issue depending purely on personal taste. This may partly be true; yet the choice of notation should not be considered an arbitrary matter. It has consequences, and it reveals the character of a language. [Wirth goes on to discuss = vs. == in C...]

Enough has been said and written about this non-feature [goto] to convince almost everyone that it is a primary example of a bad idea. The designer of Pascal retained the goto statement (as well as the if statement without closing end symbol). Apparently he lacked the courage to break with convention and made wrong concessions to traditionalists. But that was in 1968. By now, almost everybody has understood the problem, but apparently not the designers of the latest commercial programming languages, such as C#.

The concept that languages serve to communicate between humans had been completely blended out, as apparently everyone could now define his own language on the fly. The high hopes, however, were soon damped by the difficulties encountered when trying to specify, what these private constructions should mean. As a consequence, the intriguing idea of extensible languages faded away rather quickly.

LtU readers are also going to "enjoy" what Wirth has to say about functional programming...

(Thanks Tristram)

A Madman Dreams of Turing Machines

Here's something a little unusual... a novel.

A Madman Dreams of Turing Machines. Janna Levin.

In this remarkable work of fiction, astrophysicist Janna Levin reimagines the lives of two of the most important and influential minds of our time.

The narrator is a scientist herself, a physicist obsessed with Kurt Gödel, the greatest logician of many centuries, and with Alan Turing, the extraordinary mathematician, breaker of the Enigma Code during World War II.

The potential interest to this audience should be fairly obvious, although the book is centered at least as much on their substantial personal struggles as on their work. Here is the New York Times review; perhaps it would be nice to have an LtU review as well...

Flapjax - Functional Reactive Ajax

Flapjax is a functional reactive compiler and Javascript library for creating interactive web GUIs, created by Brown PLT. You can try the compiler online, or read the docs to get more of an idea of how the language works. If you've used OO-style MVC before, but not functional reactive programming (FRP), take a look at Flapjax for a different and, in my opinion, cleaner approach to the same problem.

If you're familiar with FrTime you'll recognise the elements of Flapjax -- Flapjax is essentially a Javascript implementation of FrTime, plus a compiler from the Flapjax language to Javascript to make writing code less verbose. The key differences compared to the Haskell FRP tradition are that FrTime is asynchronous and uses mutable state. This means that behaviours (time-varying values) can be sampled at any time (whereas in Haskell FRP all behaviours are sampled at the same time), and that it isn't necessary to use the arrow combinators to hide accumulators. Of course it isn't all roses: the implementation is more complex, and doesn't work well in, say, a continuation-based web server, as mutable state will mess up resumptions of previous continuations. (Don't get the idea that this is a problem for Flapjax -- it runs on the client, not the server.)
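For those who haven't seen push-based FRP, here is a minimal Python sketch of the idea (hypothetical names, not the FrTime or Flapjax API): a behaviour holds a mutable current value, so it can be sampled at any time, and pushes updates through the dataflow graph:

```python
# Toy push-based FRP behaviours. Names are invented for illustration.

class Behavior:
    """A time-varying value that pushes updates to derived behaviours."""
    def __init__(self, value):
        self.value = value          # mutable current value, sample any time
        self._listeners = []

    def set(self, value):
        self.value = value
        for recompute in self._listeners:
            recompute()

def lift(fn, *sources):
    """Derived behaviour, recomputed whenever any source changes."""
    out = Behavior(fn(*(s.value for s in sources)))

    def recompute():
        out.set(fn(*(s.value for s in sources)))

    for s in sources:
        s._listeners.append(recompute)
    return out

mouse_x = Behavior(0)
doubled = lift(lambda x: x * 2, mouse_x)
mouse_x.set(21)
print(doubled.value)  # 42
```

Note that this naive push order can "glitch" on diamond-shaped dependency graphs; FrTime avoids that by updating nodes in topological order.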

LogFun - Building Logics by Composing Functors

Logic Functors: A Toolbox of Components for Building Customized and Embeddable Logics - Sébastien Ferré and Olivier Ridoux :

Logfun is a library of logic functors. A logic functor is a function that can be applied to zero, one or several logics so as to produce a new logic as a combination of argument logics. Each argument logic can itself be built by combination of logic functors. The signature of a logic is made of a parser and a printer of formulas, logical operations such as a theorem prover for entailment between formulas, and more specific operations required by Logical Information Systems (LIS). Logic functors can be concrete domains like integers, strings, or algebraic combinators like product or sum of logics.

Logic functors are coded as Objective Caml modules. A logic semantics is associated with each of these logic functors. This makes it possible to define properties of logics like the consistency and completeness of the entailment prover, and to prove under which conditions a generated entailment prover satisfies these properties given the properties of argument logics.

Here's a bit more from the documentation (PDF):
Logic Functors form a framework for specifying new logics, and deriving automatically theorem provers and consistency/completeness diagnoses. Atomic functors are logics for manipulating symbols and concrete domains, while other functors are logic transformers that may add connectives or recursive structures, or may alter the semantics of a logic. The semantic structure of the framework is model theoretic as opposed to the verifunctional style often used in classical logic. This comes close to the semantics of description logics, and we show indeed that the logic ALC can be rebuilt using logic functors. This offers the immediate advantage that variants of ALC can be explored and implemented almost for free.

The use of OCaml functors here may be interesting even for those who aren't into logic languages. The system allows new logics to be created in OCaml itself, by simply composing OCaml functors. This is covered further in Implementation as ML Functors. The quickest way to get a feel for what this system does is to look at the examples.
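To get a rough feel for the functor idea without OCaml (sketches here stay in Python, and all names below are made up), a logic can be modelled as anything providing an entailment test, and a logic functor as a function from logics to logics:

```python
# Toy flavor of a logic functor. The real library uses OCaml modules
# and functors; these names and signatures are invented.

class Interval:
    """Atomic logic over integer ranges: (lo, hi) entails (lo2, hi2)
    iff the first range is contained in the second."""
    @staticmethod
    def entails(f, g):
        return g[0] <= f[0] and f[1] <= g[1]

def Prod(L1, L2):
    """Logic functor: product of two logics, entailment componentwise."""
    class Product:
        @staticmethod
        def entails(f, g):
            return L1.entails(f[0], g[0]) and L2.entails(f[1], g[1])
    return Product

# Compose to build a custom logic, e.g. pairs of ranges:
Pairs = Prod(Interval, Interval)
print(Pairs.entails(((1, 2), (3, 4)), ((0, 5), (3, 4))))  # True
```

The appeal is that properties (consistency, completeness) can be stated per functor and propagated through the composition, rather than re-proved for every assembled logic.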


(If the story subject line looks familiar to anyone, I was recently reading Guy Steele's Building Interpreters by Composing Monads...)

Run time type checking

A couple of items that are loosely related. First, Sun is updating the Java Class File Specification.

The main difference is the introduction of a new verification scheme based on type checking rather than type inference. This scheme has inherent advantages in performance (both space (on the order of 90% savings) and time (approximately 2x)) over the previous approach. It is also simpler and more robust, and helps pave the way for future evolution of the platform.

I haven't read through the fine print of the changes yet - at some point I need to update my class file disassembler - but then I still haven't gotten around to the changes in Java 5, which were much less involved. One thing I found interesting was the use of Prolog in the specification (predicates and Horn clauses).

The type rules that the typechecker enforces are specified by means of Prolog clauses. English language text is used to describe the type rules in an informal way, while the Prolog code provides a formal specification.
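To see why checking beats inference on speed, consider a toy verifier in Python (an invented mini-bytecode, not real JVM instructions): with a type frame declared at each point where control can join, verification becomes a single linear pass that checks declared frames, instead of iterating a dataflow analysis to a fixpoint.

```python
# Toy single-pass verification against declared stack maps.
# Opcodes and frame format are invented for illustration.

def verify(code, stack_maps):
    """At each declared frame, check the incoming stack matches the
    declaration, then continue from the declared types. No fixpoint."""
    stack = []
    for pc, (op, *args) in enumerate(code):
        if pc in stack_maps:
            if stack != stack_maps[pc]:
                raise TypeError(f"stack {stack} != declared frame at pc {pc}")
            stack = list(stack_maps[pc])
        if op == "iconst":
            stack.append("int")
        elif op == "iadd":
            if stack[-2:] != ["int", "int"]:
                raise TypeError(f"iadd needs two ints at pc {pc}")
            stack[-2:] = ["int"]
    return True

code = [("iconst", 1), ("iconst", 2), ("iadd",)]
print(verify(code, {2: ["int", "int"]}))  # True
```

The older inference-based verifier had to compute those frames itself by merging types over all paths, which is where the time and space costs came from.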

On a different note, there is a new thesis on A Semantics For Lazy Types, related to the Alice ML project. The runtime type checking it specifies lives in the more formal setting of ML's operational semantics, and is more ambitious than that associated with Java.

Type analysis needs to compare dynamic types, which have to be computed from (probably higher-order) type expressions that may contain types from other modules. If lazy linking has delayed the loading of these modules so far, then the corresponding types are represented by free type variables. For the type-level computation in our model this means that it may encounter free type variables whose denotation depends on yet unevaluated term-level expressions. To proceed, it inevitably has to eliminate these variables by triggering the necessary evaluations on term-level. In other words: type-level computation has to account for lazy types. We give two strategies for this and show soundness properties for the resulting calculi.

In general, it means you still have to deal with dynamic type checking in static languages - though the meaning is different from the runtime typing that occurs with dynamic PLs.
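To make the lazy-types idea concrete, here is a small Python sketch (an invented API, nothing like Alice ML's actual machinery): a type coming from a not-yet-linked module is a thunk, and dynamic type comparison forces it, triggering exactly the delayed term-level evaluation the quote describes.

```python
# Toy model of lazy types: forcing happens on demand during comparison.

class LazyType:
    """A type whose denotation depends on a not-yet-loaded module."""
    def __init__(self, force):
        self._force = force      # loads the module and yields the type
        self._cached = None

    def resolve(self):
        if self._cached is None:
            self._cached = self._force()   # triggers term-level evaluation
        return self._cached

def same_type(t1, t2):
    """Dynamic type comparison; forces lazy types only when needed."""
    a = t1.resolve() if isinstance(t1, LazyType) else t1
    b = t2.resolve() if isinstance(t2, LazyType) else t2
    return a == b

loads = []  # record which modules get linked, to simulate lazy linking
t = LazyType(lambda: (loads.append("M"), "int")[1])
print(same_type(t, "int"), loads)  # True ['M']
```

Note how the module load is observable as a side effect of a type comparison; that entanglement of type-level and term-level evaluation is what the thesis has to give a sound semantics for.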

Scott Rosenberg: Code Reads

This is the inaugural edition of Code Reads, a weekly discussion of some of the central essays, documents and texts in the history of software. This week we're talking about Frederick Brooks's The Mythical Man-Month. (OK, let's be honest: I'm talking about it. I'm hoping you, or you, or you, may want to, as well!)

This is an ongoing series with Dijkstra's "Go To Statement Considered Harmful" coming up. This essay was mentioned here a few times, of course, so you might want to check the archives.

This item is not directly language related, but since you can win prizes, I thought I'd better let you guys know...

A Stepper for Scheme Macros

A Stepper for Scheme Macros. Ryan Culpepper, Matthias Felleisen.

In this paper, we present a macro debugger with full support for modern Scheme macros. To construct the debugger, we have extended the macro expander so that it issues a series of expansion events. A parser turns these event streams into derivations in a natural semantics for macro expansion. From these derivations, the debugger extracts a reduction-sequence (stepping) view of the expansion. A programmer can specify with simple policies which parts of a derivation to omit and which parts to show. Last but not least, the debugger includes a syntax browser that graphically displays the various pieces of information that the expander attaches to syntactic tokens.

Another paper from the Scheme workshop.

Apart from being a nice exercise for macro lovers, a good macro debugger is essential for lowering the barrier to macro programming. Since macro programming is an important technique for building DSELs and for language oriented programming in general, having better macro debugging facilities is a Good Thing.

A Very Modal Model of a Modern, Major, General Type System

A Very Modal Model of a Modern, Major, General Type System, by Andrew W. Appel, Paul-Andre Mellies, Christopher D. Richards, and Jerome Vouillon. Preliminary version of August 10, 2006.

We wish to compile languages such as ML and Java into typed intermediate languages and typed assembly languages. These TILs and TALs are particularly difficult to design, because in order to describe the program transformations applied in the course of compilation, they require a very rich and expressive type system... Putting all these type ingredients together in a low-level language is an intricate exercise. A formal proof of soundness —any well-typed program does not go wrong—is thus recommended for any type system for such TILs and TALs.

It has been a while since we discussed work in this area. The current paper is quite intricate, it seems, and I don't have the time to read it carefully. Maybe someone else would care to elaborate. The paper makes a few technical innovations, and uses several interesting techniques. Soundness is not proved syntactically, but rather semantically.

Some LtU members will be happy to see that the authors use Coq to formalize their proofs.